199 research outputs found

    Digital Scotland, the relevance of library research and the Glasgow Digital Library Project

    The Glasgow Digital Library (GDL) Project has a significance over and above its primary aim of creating a joint digital library for the citizens of Glasgow. It is also both an important building block in the development of a planned and co-ordinated 'virtual Scotland' and a rich environment for research into issues relevant to that enterprise. Its creation comes at a time of political, social, economic and cultural change in Scotland, and may be seen, at least in part, as a response to a developing Scottish focus in these areas. A key element of that focus is a new socially inclusive and digitally driven educational vision and strategy based on the Scottish traditions of meritocratic education, sharing and common enterprise, and a fiercely independent approach. The initiative is based at the Centre for Digital Library Research at Strathclyde University, alongside a range of other projects relevant both to the development of a coherent virtual landscape in Scotland and to the GDL itself. This supportive environment allows the project to draw upon the research results and staff expertise of related projects for its own development, and enables its relationship to virtual Scotland to be explored and developed more readily. Although its primary aim is the creation of content (based initially on electronic resources created by the institutions, on public domain information, and on joint purchases and digitisation initiatives), the project will also investigate relationships between regional and national collaborative collection management programmes with SCONE (Scottish Collections Network Extension project), and between regional and national distributed union catalogues with CAIRNS (Co-operative Academic Information Retrieval Network for Scotland) and COSMIC (Confederation of Scottish Mini-Clumps). It will also have to tackle issues associated with the management of co-operation.

    CSI: A nonparametric Bayesian approach to network inference from multiple perturbed time series gene expression data

    How an organism responds to the environmental challenges it faces is heavily influenced by its gene regulatory networks (GRNs). Whilst most methods for inferring GRNs from time series mRNA expression data can only cope with a single time series (or a single perturbation with biological replicates), it is becoming increasingly common for several time series to be generated under different experimental conditions. The CSI algorithm (Klemm, 2008) represents one approach to inferring GRNs from multiple time series data, and has previously been shown to perform well on a variety of datasets (Penfold and Wild, 2011). Another challenge in network inference is the identification of condition-specific GRNs, i.e., identifying how a GRN is rewired under different conditions or in different individuals. The Hierarchical Causal Structure Identification (HCSI) algorithm (Penfold et al., 2012) is one approach that allows inference of condition-specific networks (Hickman et al., 2013), and it has been shown to be more accurate at reconstructing known networks than inference on the individual datasets alone. Here we describe a MATLAB implementation of CSI/HCSI that includes fast approximate solutions to CSI as well as Markov chain Monte Carlo implementations of both CSI and HCSI, together with a user-friendly GUI, with the intention of making the analysis of networks from multiple perturbed time series datasets more accessible to the wider community. The GUI guides the user through each stage of the analysis, from loading the data to parameter selection and visualisation of networks, and can be launched by typing >> csi into the MATLAB command line. For each step of the analysis, links to documentation and tutorials are available within the GUI, including documentation on visualising and interacting with output files.
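
    For readers unfamiliar with the underlying model, the sketch below illustrates the general idea of CSI-style scoring: for each target gene, candidate parent sets are ranked by the marginal likelihood of a Gaussian process regression from the parents' expression at time t to the target's expression at time t+1, pooled across the perturbed time series. This is a minimal Python illustration, not the authors' MATLAB implementation; the data layout, maximum in-degree, kernel choice and the use of scikit-learn are assumptions made for the example.

        # Minimal sketch of CSI-style parental-set scoring (illustrative only,
        # not the authors' MATLAB tool).  Each candidate parent set is scored
        # by the log marginal likelihood of a GP regression from the parents'
        # expression at time t to the target's expression at time t+1,
        # pooled across all perturbed time series.
        from itertools import combinations
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def score_parent_set(series_list, target, parents):
            """series_list: list of (genes x timepoints) arrays, one per perturbation."""
            X, y = [], []
            for S in series_list:
                X.append(S[list(parents), :-1].T)   # parents at time t
                y.append(S[target, 1:])             # target at time t+1
            X, y = np.vstack(X), np.concatenate(y)
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
            gp.fit(X, y)
            return gp.log_marginal_likelihood_value_

        def rank_parent_sets(series_list, target, n_genes, max_parents=2):
            """Enumerate parent sets up to a maximum in-degree and rank them."""
            candidates = [g for g in range(n_genes) if g != target]
            scored = []
            for k in range(1, max_parents + 1):
                for pa in combinations(candidates, k):
                    scored.append((score_parent_set(series_list, target, pa), pa))
            return sorted(scored, reverse=True)     # best-scoring parent sets first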

    Learning and Inference methodologies for Hybrid Dynamic Bayesian networks. A case study for a water reservoir system in Andalusia, Spain.

    Time series analysis requires powerful and robust tools; at the same time, the tools must be intuitive for users. Bayesian networks have been widely applied to static problem modelling, but in some knowledge areas Dynamic Bayesian networks are hardly known. Such is the case in the environmental sciences, where the application of static Bayesian networks in water resources research is notable, while fewer than five papers on the dynamic extension have been found in the literature. The aim of this paper is to show how Dynamic Bayesian networks can be applied in the environmental sciences by means of a case study in water reservoir system management. Two approaches are applied and compared for model learning, and another two for inference. Despite slight differences in model complexity and computational time, both approaches for model learning provide similar results. For the inference methods there were, again, slight differences in computational time, but the selection of one approach over the other depends on the prediction needed: if the aim is just to go one step forward, the Window and Roll out approaches perform similarly, but when we need to go more than one step forward, Roll out is the more appropriate choice.
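
    As a rough illustration of the distinction the abstract draws between the two prediction modes, the toy Python sketch below uses a linear-Gaussian transition model as a stand-in for a two-slice Dynamic Bayesian network: a 'Roll out' style prediction feeds each prediction back in to look several steps ahead, while a 'Window' style prediction conditions on the most recent observation and looks only one step forward. The variables, transition matrix and data are hypothetical; the paper's actual reservoir model and learning procedure are not reproduced here.

        # Toy sketch of multi-step prediction with a two-slice dynamic Bayesian
        # network, reduced here to a linear-Gaussian transition x_{t+1} = A x_t + noise.
        # The state variables and transition matrix are illustrative stand-ins,
        # not the reservoir model from the paper.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical state: [reservoir_level, inflow] with an assumed transition.
        A = np.array([[0.95, 0.30],
                      [0.00, 0.80]])
        noise_std = 0.05

        def simulate(x0, steps):
            """Generate a synthetic trajectory from the toy transition model."""
            xs = [np.asarray(x0, dtype=float)]
            for _ in range(steps):
                xs.append(A @ xs[-1] + rng.normal(0.0, noise_std, size=2))
            return np.array(xs)

        def roll_out(x_last, k):
            """'Roll out' style: feed each prediction back in to look k steps ahead."""
            preds, x = [], np.asarray(x_last, dtype=float)
            for _ in range(k):
                x = A @ x                       # expected next state
                preds.append(x.copy())
            return np.array(preds)

        def window_one_step(observed):
            """'Window' style: condition on each latest observation, predict one step."""
            return np.array([A @ x for x in observed])

        traj = simulate([1.0, 0.5], steps=20)
        print("3-step roll out from t=20:", roll_out(traj[-1], k=3))
        print("one-step-ahead (window):  ", window_one_step(traj[-3:]))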

    Dynamic Time Warping Averaging of Time Series Allows Faster and More Accurate Classification

    Recent years have seen significant progress in improving both the efficiency and effectiveness of time series classification. However, because the best solution is typically the Nearest Neighbor algorithm with the relatively expensive Dynamic Time Warping as the distance measure, successful deployments on resource-constrained devices remain elusive. Moreover, the recent explosion of interest in wearable devices, which typically have limited computational resources, has created a growing need for very efficient classification algorithms. A commonly used technique to glean the benefits of the Nearest Neighbor algorithm, without inheriting its undesirable time complexity, is to use the Nearest Centroid algorithm. However, because of the unique properties of (most) time series data, the centroid typically does not resemble any of the instances, an unintuitive and underappreciated fact. In this work we show that we can exploit a recent result to allow meaningful averaging of 'warped' time series, and that this result allows us to create ultra-efficient Nearest 'Centroid' classifiers that are at least as accurate as their more lethargic Nearest Neighbor cousins.
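
    For context, the baseline the abstract starts from can be sketched in a few lines: the quadratic dynamic-programming formulation of the DTW distance wrapped in a brute-force one-nearest-neighbour classifier. The Python below is an unoptimised reference sketch (no warping window or lower bounds), and its cost is exactly what motivates the nearest-centroid alternative.

        # Reference sketch of the 1-NN + DTW baseline: the classic O(n*m)
        # dynamic-programming DTW distance inside a brute-force nearest-
        # neighbour classifier.  No warping-window constraint or lower
        # bounding is applied, so this is the slow version.
        import numpy as np

        def dtw_distance(a, b):
            """Dynamic Time Warping distance between two 1-D series."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = (a[i - 1] - b[j - 1]) ** 2
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return np.sqrt(D[n, m])

        def nn_dtw_classify(query, train_series, train_labels):
            """Label the query with the class of its nearest training series."""
            dists = [dtw_distance(query, s) for s in train_series]
            return train_labels[int(np.argmin(dists))]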

    Faster and more accurate classification of time series by exploiting a novel dynamic time warping averaging algorithm

    A concerted research effort over the past two decades has heralded significant improvements in both the efficiency and effectiveness of time series classification. The consensus that has emerged in the community is that the best solution is a surprisingly simple one. In virtually all domains, the most accurate classifier is the nearest neighbor algorithm with dynamic time warping as the distance measure. The time complexity of dynamic time warping means that successful deployments on resource-constrained devices remain elusive. Moreover, the recent explosion of interest in wearable computing devices, which typically have limited computational resources, has greatly increased the need for very efficient classification algorithms. A classic technique to obtain the benefits of the nearest neighbor algorithm, without inheriting its undesirable time and space complexity, is to use the nearest centroid algorithm. Unfortunately, the unique properties of (most) time series data mean that the centroid typically does not resemble any of the instances, an unintuitive and underappreciated fact. In this paper we demonstrate that we can exploit a recent result by Petitjean et al. to allow meaningful averaging of “warped” time series, which then allows us to create super-efficient nearest “centroid” classifiers that are at least as accurate as their more computationally challenged nearest neighbor relatives. We demonstrate empirically the utility of our approach by comparing it to all the appropriate strawman algorithms on the ubiquitous UCR Benchmarks and with a case study in supporting insect classification on resource-constrained sensors.
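
    The averaging result by Petitjean et al. that both papers build on is DTW Barycenter Averaging (DBA): an average series is refined iteratively so that it stays aligned, under DTW, with every series it summarises, and one such average per class then serves as the 'centroid' in a nearest-centroid classifier. The sketch below is a simplified Python re-implementation for illustration only, not the authors' optimised code; the squared-difference local cost, fixed average length and iteration count are assumptions.

        # Simplified sketch of DTW Barycenter Averaging (DBA) and the
        # nearest-'centroid' classifier built on it (illustration only).
        import numpy as np

        def dtw(a, b):
            """Return the DTW distance and the optimal alignment path between a and b."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = (a[i - 1] - b[j - 1]) ** 2
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            # Backtrack from the corner, always moving to the cheapest predecessor.
            path, i, j = [], n, m
            while i > 0 and j > 0:
                path.append((i - 1, j - 1))
                step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
                if step == 0:
                    i, j = i - 1, j - 1
                elif step == 1:
                    i -= 1
                else:
                    j -= 1
            return np.sqrt(D[n, m]), path[::-1]

        def dba_average(series, n_iter=10):
            """Refine an average: each point becomes the mean of the values
            aligned to it across all series, with alignments recomputed each pass."""
            avg = np.array(series[0], dtype=float)
            for _ in range(n_iter):
                buckets = [[] for _ in range(len(avg))]
                for s in series:
                    _, path = dtw(avg, s)
                    for i, j in path:
                        buckets[i].append(s[j])
                avg = np.array([np.mean(b) for b in buckets])
            return avg

        def nearest_centroid_classify(query, class_averages):
            """class_averages: dict mapping label -> DBA average of that class."""
            return min(class_averages, key=lambda lbl: dtw(query, class_averages[lbl])[0])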